Reconstruction methods with memory modules for visual anomaly detection aim to minimize the reconstruction error of normal samples while amplifying it for anomalous samples. Unfortunately, existing memory modules are not fully suited to the anomaly detection task, and the reconstruction error of anomalous samples often remains small. To this end, this work proposes a novel unsupervised visual anomaly detection method to jointly learn effective normal features and eliminate unfavorable reconstruction errors. Specifically, a novel Partition Memory Bank (PMB) module is proposed to effectively learn and store detailed features of normal samples while preserving their semantic integrity. It develops a new partition mechanism and a unique query-generation method to preserve contextual information and thereby improve the learning capability of the memory module. The proposed PMB and skip connections are explored in alternation to make the reconstruction of anomalous samples worse. To obtain more precise anomaly localization results and address the problem of accumulated reconstruction error, a novel Histogram Error Estimation module is proposed to adaptively eliminate unfavorable errors through the histogram of the difference image. It improves anomaly localization performance without extra cost. To evaluate the effectiveness of the proposed method for anomaly detection and localization, extensive experiments are conducted on three widely used anomaly detection datasets. The encouraging performance of the proposed method, compared with state-of-the-art memory-module-based methods, demonstrates its superiority.
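The histogram-based error elimination described above can be sketched roughly as follows. This is a minimal illustration of the general idea (estimate a cutoff from the histogram of the difference image and suppress small accumulated errors); the function name, thresholding rule, and parameters are assumptions, not the paper's exact formulation:

```python
import numpy as np

def histogram_error_filter(diff, num_bins=64, keep_ratio=0.95):
    """Suppress small accumulated reconstruction errors.

    Most pixels of the difference image carry small, benign errors, so a
    cutoff is estimated from the histogram and everything below it is
    zeroed out. Illustrative only, not the paper's exact rule.
    """
    flat = diff.ravel()
    hist, edges = np.histogram(flat, bins=num_bins)
    cdf = np.cumsum(hist) / flat.size           # cumulative pixel fraction
    cut_idx = np.searchsorted(cdf, keep_ratio)  # bin covering `keep_ratio` mass
    threshold = edges[min(cut_idx + 1, num_bins)]
    cleaned = np.where(diff >= threshold, diff, 0.0)
    return cleaned, threshold

# Toy difference image: mostly small noise plus one anomalous region.
rng = np.random.default_rng(0)
diff = np.abs(rng.normal(0.0, 0.02, size=(64, 64)))
diff[20:28, 20:28] += 0.8                       # simulated anomaly
cleaned, thr = histogram_error_filter(diff)
```

In this toy setup the anomalous block survives the filtering while most of the low-level background error is removed.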
Unsupervised anomaly detection is a challenging task in industrial applications, since collecting a sufficient number of anomalous samples is impractical. In this paper, a novel Self-supervised Guided Segmentation Framework (SGSF) is proposed by jointly exploring an effective method for generating forged anomalous samples and the use of normal-sample features as guidance information for segmentation-based anomaly detection. Specifically, to ensure that the generated forged anomalous samples benefit model training, a Saliency Augmentation Module (SAM) is proposed. SAM introduces a saliency map to produce a saliency Perlin noise map, and develops an adaptive segmentation strategy to generate irregular masks in salient regions. The masks are then used to produce forged anomalous samples as negative samples for training. Unfortunately, the distribution gap between forged and real anomalous samples makes it difficult for models trained on forged samples to localize real anomalies effectively. To this end, a Self-supervised Guided Network (SGN) is proposed. It leverages a self-supervised module to extract noise-free features that contain normal semantic information as prior knowledge for the segmentation module. The segmentation module, equipped with knowledge of normal patterns, segments the regions that differ from the guidance features. To evaluate the effectiveness of SGSF for anomaly detection, extensive experiments are conducted on three anomaly detection datasets. The experimental results show that SGSF achieves state-of-the-art anomaly detection results.
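The mask-generation step in SAM can be sketched as follows. This is a simplified stand-in: a bilinearly upsampled random grid imitates the blob-like structure of Perlin noise (it is not true Perlin noise), and the thresholds and function names are assumptions for illustration:

```python
import numpy as np

def perlin_like_noise(h, w, grid=8, seed=0):
    """Cheap stand-in for Perlin noise: bilinear upsampling of a coarse
    random grid gives smooth, blob-like structure (not true Perlin noise)."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((grid + 1, grid + 1))
    ys = np.linspace(0, grid, h)
    xs = np.linspace(0, grid, w)
    y0 = np.clip(ys.astype(int), 0, grid - 1)
    x0 = np.clip(xs.astype(int), 0, grid - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    c00 = coarse[y0][:, x0]
    c01 = coarse[y0][:, x0 + 1]
    c10 = coarse[y0 + 1][:, x0]
    c11 = coarse[y0 + 1][:, x0 + 1]
    return (c00 * (1 - fy) * (1 - fx) + c01 * (1 - fy) * fx
            + c10 * fy * (1 - fx) + c11 * fy * fx)

def saliency_perlin_mask(saliency, noise_thresh=0.6, sal_thresh=0.5, seed=0):
    """Irregular binary mask restricted to salient regions: threshold the
    noise map, then intersect it with the thresholded saliency map."""
    h, w = saliency.shape
    noise = perlin_like_noise(h, w, seed=seed)
    return ((noise > noise_thresh) & (saliency > sal_thresh)).astype(np.uint8)

# Toy saliency map: a bright square in the center of the image.
sal = np.zeros((64, 64))
sal[16:48, 16:48] = 1.0
mask = saliency_perlin_mask(sal)
```

The resulting binary mask is nonzero only inside the salient region, where the forged anomaly would then be pasted.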
Weakly supervised Referring Expression Grounding (REG) aims to ground a specific target in an image described by a language expression, while the correspondence between targets and expressions is unavailable. Weakly supervised REG suffers from two main problems. First, the lack of region-level annotations introduces ambiguity between proposals and queries. Second, most previous weakly supervised REG methods ignore the discriminative location and context of the referent, causing difficulty in distinguishing the target from other objects of the same category. To address the above challenges, we design an Entity-enhanced Adaptive Reconstruction Network (EARN). Specifically, EARN consists of three modules: entity enhancement, adaptive grounding, and collaborative reconstruction. In entity enhancement, we compute semantic similarity as supervision to select candidate proposals. Adaptive grounding computes ranking scores for the candidate proposals with hierarchical attention over subject, location, and context. Collaborative reconstruction measures the ranking results from three perspectives: adaptive reconstruction, language reconstruction, and attribute classification. The adaptive mechanism helps to alleviate the variance of different referring expressions. Experiments on five datasets show that EARN outperforms existing state-of-the-art methods. Qualitative results demonstrate that the proposed EARN better handles situations where multiple objects of a particular category appear together.
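The proposal-selection idea in the entity-enhancement step can be illustrated with a toy sketch: rank region proposals by cosine similarity to the expression embedding and keep the top-k as candidates. The embeddings here are random stand-ins; real features would come from vision and language encoders, which are omitted:

```python
import numpy as np

def select_proposals(query_emb, proposal_embs, top_k=3):
    """Rank proposals by cosine similarity to the expression embedding
    and keep the top-k as candidates (toy stand-in, not EARN's exact rule)."""
    q = query_emb / np.linalg.norm(query_emb)
    p = proposal_embs / np.linalg.norm(proposal_embs, axis=1, keepdims=True)
    scores = p @ q                       # cosine similarity per proposal
    order = np.argsort(-scores)[:top_k]  # indices of the top-k proposals
    return order, scores

rng = np.random.default_rng(1)
proposals = rng.normal(size=(10, 16))                # 10 proposal embeddings
target = proposals[4] + 0.05 * rng.normal(size=16)   # query near proposal 4
idx, scores = select_proposals(target, proposals, top_k=3)
```

Because the query embedding is nearly identical to proposal 4, that proposal is ranked first.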
Freezing the pre-trained backbone has become a standard paradigm to avoid overfitting in few-shot segmentation. In this paper, we rethink this paradigm and explore a new regime: {\em fine-tuning a small part of the parameters in the backbone}. We present a solution to the overfitting problem that enables models learning novel classes to generalize better. Our method decomposes the backbone parameters into three successive matrices via singular value decomposition (SVD), and then {\em fine-tunes only the singular values} while keeping the others frozen. This design allows the model to adjust feature representations on novel classes while maintaining the semantic clues within the pre-trained backbone. We evaluate our {\em Singular Value Fine-tuning (SVF)} approach on various few-shot segmentation methods with different backbones. We achieve state-of-the-art results on both PASCAL-5$^i$ and COCO-20$^i$. Hopefully, this simple baseline will encourage researchers to rethink the role of backbone fine-tuning in few-shot settings. The source code and models will be available at \url{https://github.com/syp2ysy/svf}.
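The decomposition at the heart of SVF can be sketched in a few lines of NumPy. This shows only the split-and-merge of a weight matrix, not a training loop; in the actual method, gradients would flow to the singular values S alone while U and V^T stay frozen:

```python
import numpy as np

def svf_split(weight):
    """Decompose a linear/conv weight into U, S, Vt via SVD. In Singular
    Value Fine-tuning, only S would be trainable; U and Vt stay frozen."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    return u, s, vt

def svf_merge(u, s, vt):
    """Rebuild the weight from the (possibly fine-tuned) singular values."""
    return (u * s) @ vt

rng = np.random.default_rng(0)
w = rng.normal(size=(32, 64))
u, s, vt = svf_split(w)
w_rec = svf_merge(u, s, vt)             # exact reconstruction
w_tuned = svf_merge(u, s * 1.1, vt)     # scaling S mimics a fine-tuning update
```

Because only `s` (here 32 values for a 32x64 weight) would receive updates, the trainable-parameter count is a tiny fraction of the full backbone, which is what makes the regime resistant to overfitting.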
Weakly supervised pixel-wise dense prediction tasks currently use Class Activation Maps (CAMs) to generate pseudo masks as ground truth. However, existing methods usually depend on additional training modules, which may introduce significant computational overhead and complex training procedures. In this work, Semantic Structure Aware inference (SSA) is proposed to explore the semantic structure information hidden in the different stages of a CNN-based network, so as to generate high-quality CAMs at model inference. Specifically, a semantic structure modeling module (SSM) is first proposed to generate a category-agnostic semantic relation representation, where each item indicates the degree of affinity between one category of objects and all the others. Then, the structured feature representation is explored to polish the immature CAMs via a dot-product operation. Finally, the polished CAMs from different backbone stages are fused as the output. The proposed method has the advantage of being parameter-free and requiring no training. Therefore, it can be applied to a wide range of weakly supervised pixel-wise dense prediction tasks. Experimental results on weakly supervised object localization and weakly supervised semantic segmentation tasks demonstrate the effectiveness of the proposed method, which achieves new state-of-the-art results on both tasks.
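The CAM-polishing step via a dot-product affinity can be sketched schematically as follows. The row-normalization and the toy two-cluster features are assumptions for illustration; the paper's exact affinity construction may differ:

```python
import numpy as np

def polish_cam(cam, features):
    """Refine a coarse class activation map with a semantic affinity matrix.

    `features` is an (H*W, C) matrix of per-position features; the pairwise
    dot-product affinity (row-normalized) propagates activation between
    semantically similar positions.
    """
    h, w = cam.shape
    affinity = features @ features.T                 # (HW, HW) similarities
    affinity = np.maximum(affinity, 0)               # keep positive relations
    affinity /= affinity.sum(axis=1, keepdims=True) + 1e-8
    polished = affinity @ cam.reshape(-1)
    return polished.reshape(h, w)

# Toy example: two feature clusters; the CAM fires on only half of cluster A.
feats = np.zeros((16, 4))
feats[:8, 0] = 1.0     # positions 0..7: cluster A
feats[8:, 1] = 1.0     # positions 8..15: cluster B
cam = np.zeros((4, 4))
cam.flat[:4] = 1.0     # only 4 of cluster A's 8 positions activated
refined = polish_cam(cam, feats)
```

After polishing, the activation spreads evenly over all of cluster A (each position gets the cluster mean, 0.5) while cluster B stays at zero, which is how the affinity completes an immature CAM.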
Owing to its efficiency in computation and storage, hashing has been widely applied to multimodal retrieval on large-scale multimedia data. In this paper, we propose a novel Deep Semantic Multimodal Hashing Network (DSMHN) for scalable image-text and video-text retrieval. The proposed deep hashing framework uses 2-D convolutional neural networks (CNNs) as the backbone to capture spatial information for image-text retrieval, and 3-D CNNs as the backbone to capture both spatial and temporal information for video-text retrieval. In DSMHN, two sets of modality-specific hash functions are jointly learned by explicitly preserving both inter-modality similarity and intra-modality semantic labels. Specifically, under the assumption that the learned hash codes should be optimal for the classification task, the two stream networks are jointly trained to learn the hash functions by embedding the semantic labels on the resulting hash codes. Moreover, a unified deep multimodal hashing framework is proposed to learn compact and high-quality hash codes by simultaneously exploiting feature representation learning, inter-modality similarity-preserving learning, semantic-label-preserving learning, and hash-function learning with different types of loss functions. The proposed DSMHN is a generic and scalable deep hashing framework for both image-text and video-text retrieval, and it can flexibly integrate different types of loss functions. We conduct extensive experiments on both single-modal and cross-modal retrieval tasks on four widely used multimedia retrieval datasets. Experimental results on image-text and video-text retrieval tasks demonstrate that DSMHN significantly outperforms the state-of-the-art methods.
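The quantization-and-retrieval backbone common to deep hashing methods like DSMHN can be sketched as follows. The network branches are replaced by random stand-in embeddings, and the sign-based binarization is the standard quantization step rather than DSMHN's full training objective:

```python
import numpy as np

def to_hash_codes(embeddings):
    """Binarize real-valued network outputs into hash codes via their sign,
    the standard quantization step in deep hashing."""
    return (np.asarray(embeddings) > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable"), dists

# Toy cross-modal setup: embeddings of the same item in two modalities
# should quantize to (nearly) the same code. Random stand-ins for the
# 2-D/3-D CNN and text branches.
rng = np.random.default_rng(0)
db = rng.normal(size=(8, 32))                 # 8 items, 32-bit codes
db_codes = to_hash_codes(db)
query = db[3] + 0.01 * rng.normal(size=32)    # near-duplicate of item 3
order, dists = hamming_rank(to_hash_codes(query), db_codes)
```

Retrieval then reduces to fast bitwise comparisons: the near-duplicate query lands on item 3 first.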
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
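The difference between the two attacks is where the trigger enters the pipeline; the NAIVEATTACK half, stamping a trigger into the raw data before distillation, can be sketched as follows. The patch content and placement here are illustrative assumptions, not the paper's exact trigger:

```python
import numpy as np

def add_trigger(images, trigger):
    """Stamp a small trigger patch into the bottom-right corner of a batch
    of images, as in NAIVEATTACK's pre-distillation poisoning step.
    (DOORPING would instead re-optimize the trigger at every distillation
    iteration, which is omitted here.)"""
    th, tw = trigger.shape
    out = images.copy()                 # leave the clean batch untouched
    out[:, -th:, -tw:] = trigger
    return out

rng = np.random.default_rng(0)
clean = rng.random((4, 32, 32))         # 4 grayscale images in [0, 1)
trigger = np.ones((3, 3))               # 3x3 white-square trigger
poisoned = add_trigger(clean, trigger)
```

The poisoned batch would then be fed into the distillation procedure, so the trigger pattern survives into the synthetic dataset rather than being added at model-training time.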
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, with little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
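The feature-level step, generating a dynamic class center from the support mask and re-weighting query features, can be sketched schematically. Masked average pooling and the residual re-weighting rule used here are common simplifications and assumptions for illustration, not RefT's exact modules:

```python
import numpy as np

def masked_class_center(support_feat, support_mask):
    """Masked average pooling: average support features inside the object
    mask to obtain a dynamic class center."""
    w = support_mask.reshape(-1).astype(float)
    f = support_feat.reshape(-1, support_feat.shape[-1])
    return (w[:, None] * f).sum(0) / (w.sum() + 1e-8)

def reweight_query(query_feat, center):
    """Re-weight query features by cosine similarity to the class center
    (simple residual scheme: positions similar to the center are boosted)."""
    h, w_, c = query_feat.shape
    q = query_feat.reshape(-1, c)
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    cn = center / (np.linalg.norm(center) + 1e-8)
    sim = (qn @ cn).reshape(h, w_, 1)
    return query_feat * (1 + sim)

# Toy 4x4 feature maps with 2 channels.
support = np.zeros((4, 4, 2)); support[:2, :2] = [1.0, 0.0]
mask = np.zeros((4, 4)); mask[:2, :2] = 1
center = masked_class_center(support, mask)
query = np.zeros((4, 4, 2))
query[1, 1] = [1.0, 0.0]   # matches the support class
query[2, 2] = [0.0, 1.0]   # a different pattern
out = reweight_query(query, center)
```

The query position matching the class center is roughly doubled, while the dissimilar one passes through unchanged, which is the intended re-weighting effect.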